PROBLEM STATEMENT¶

This is a Georgian used-car price dataset, taken from Kaggle. It has 19237 rows and 18 columns.

Our goal is to predict the price of a car from its features using machine learning.

A first look shows that the data is not clean, so we have to clean it before modelling.

We have to find the best model for car price prediction. Our aim is an R2 score of 0.75 or more and an MAE below 5000.

We will try Linear Regression, Support Vector Regressor, Decision Tree, Random Forest, and XGBoost.

We will then fine-tune the best of these models to get the final model.

DATA LOADING AND DESCRIPTION¶

In [2]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import nbconvert
In [4]:
car_data = pd.read_csv("D:Local Disk/Downloads/car_price_prediction.csv")
car_data.sample(5)
Out[4]:
ID Price Levy Manufacturer Model Prod. year Category Leather interior Fuel type Engine volume Mileage Cylinders Gear box type Drive wheels Doors Wheel Color Airbags
12292 45811132 14583 1058 LEXUS RX 450 2012 Jeep Yes Hybrid 3.5 257072 km 6.0 Automatic 4x4 04-May Left wheel Grey 12
11304 45810779 21326 - MITSUBISHI Outlander 2012 Jeep No Petrol 2.4 Turbo 128000 km 4.0 Automatic 4x4 04-May Left wheel Silver 16
15987 45770961 549 585 TOYOTA Prius 2013 Jeep Yes Hybrid 1.8 298248 km 4.0 Automatic Front 04-May Left wheel White 12
8466 45788676 4000 - VOLKSWAGEN Vento 1994 Sedan No Petrol 1.8 182000 km 4.0 Manual Front 04-May Left wheel Blue 0
12864 45801569 7997 - VOLKSWAGEN Passat B5 2003 Sedan Yes Petrol 1.8 Turbo 139000 km 4.0 Tiptronic Front 04-May Left wheel Black 8
In [5]:
print(car_data.info())
print("--"*50)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 19237 entries, 0 to 19236
Data columns (total 18 columns):
 #   Column            Non-Null Count  Dtype  
---  ------            --------------  -----  
 0   ID                19237 non-null  int64  
 1   Price             19237 non-null  int64  
 2   Levy              19237 non-null  object 
 3   Manufacturer      19237 non-null  object 
 4   Model             19237 non-null  object 
 5   Prod. year        19237 non-null  int64  
 6   Category          19237 non-null  object 
 7   Leather interior  19237 non-null  object 
 8   Fuel type         19237 non-null  object 
 9   Engine volume     19237 non-null  object 
 10  Mileage           19237 non-null  object 
 11  Cylinders         19237 non-null  float64
 12  Gear box type     19237 non-null  object 
 13  Drive wheels      19237 non-null  object 
 14  Doors             19237 non-null  object 
 15  Wheel             19237 non-null  object 
 16  Color             19237 non-null  object 
 17  Airbags           19237 non-null  int64  
dtypes: float64(1), int64(4), object(13)
memory usage: 2.6+ MB
None
----------------------------------------------------------------------------------------------------
In [6]:
print(car_data.describe())
print("--"*50)
print(car_data.columns)
print("--"*50)
print(car_data.shape)
                 ID         Price    Prod. year     Cylinders       Airbags
count  1.923700e+04  1.923700e+04  19237.000000  19237.000000  19237.000000
mean   4.557654e+07  1.855593e+04   2010.912824      4.582991      6.582627
std    9.365914e+05  1.905813e+05      5.668673      1.199933      4.320168
min    2.074688e+07  1.000000e+00   1939.000000      1.000000      0.000000
25%    4.569837e+07  5.331000e+03   2009.000000      4.000000      4.000000
50%    4.577231e+07  1.317200e+04   2012.000000      4.000000      6.000000
75%    4.580204e+07  2.207500e+04   2015.000000      4.000000     12.000000
max    4.581665e+07  2.630750e+07   2020.000000     16.000000     16.000000
----------------------------------------------------------------------------------------------------
Index(['ID', 'Price', 'Levy', 'Manufacturer', 'Model', 'Prod. year',
       'Category', 'Leather interior', 'Fuel type', 'Engine volume', 'Mileage',
       'Cylinders', 'Gear box type', 'Drive wheels', 'Doors', 'Wheel', 'Color',
       'Airbags'],
      dtype='object')
----------------------------------------------------------------------------------------------------
(19237, 18)
In [7]:
car_data.isnull().sum()
Out[7]:
ID                  0
Price               0
Levy                0
Manufacturer        0
Model               0
Prod. year          0
Category            0
Leather interior    0
Fuel type           0
Engine volume       0
Mileage             0
Cylinders           0
Gear box type       0
Drive wheels        0
Doors               0
Wheel               0
Color               0
Airbags             0
dtype: int64

After looking at the data, I found the following impurities:

  1. Levy has many rows filled with "-".
  2. Price has a very large outlier that we have to delete.
  3. Engine volume has entries like "3.5 Turbo"; we have to strip the "Turbo" suffix and then convert the dtype to float.
  4. Mileage has entries like "200000 km"; we have to strip the "km" suffix and then convert the dtype to float.
  5. Model and Manufacturer have a large number of unique values, so we need to bin them; we will do this later in the feature engineering part.
  6. Doors has Excel-mangled values like "04-May" and "02-Mar"; we will change them to "04-05" and "02-03".

DATA PREPROCESSING¶

In this part, we clean the data, i.e. treat the problems identified in the diagnostic EDA above.

After cleaning, we will convert the dtypes of Levy, Mileage and Engine volume to float32.

We will also delete duplicates here.

Cleaning the data¶

In [ ]:
# Vectorized cleaning: replace the "-" placeholder in Levy with NaN,
# strip the "km" and "Turbo" suffixes, and fix the Excel-mangled door
# labels. (Vectorized column ops avoid the chained-assignment pitfall
# of writing car_data["col"][i] = ... in a loop.)
car_data["Levy"] = car_data["Levy"].replace("-", np.nan)
car_data["Mileage"] = car_data["Mileage"].str.replace(" km", "", regex=False)
car_data["Engine volume"] = car_data["Engine volume"].str.replace(" Turbo", "", regex=False)
car_data["Doors"] = car_data["Doors"].replace({"04-May": "04-05", "02-Mar": "02-03"})

Converting the data types¶

In [9]:
car_data["Mileage"] = car_data["Mileage"].astype('float32')
car_data["Engine volume"] = car_data["Engine volume"].astype('float32')
car_data["Levy"] = car_data["Levy"].astype('float32')
car_data["Airbags"] = car_data["Airbags"].astype('int32')
car_data["Prod. year"] = car_data["Prod. year"].astype('int32')
car_data["Price"] = car_data["Price"].astype('int32')
car_data["ID"] = car_data["ID"].astype('int32')
car_data["Cylinders"] = car_data["Cylinders"].astype('int32')
In [10]:
car_data.dtypes
Out[10]:
ID                    int32
Price                 int32
Levy                float32
Manufacturer         object
Model                object
Prod. year            int32
Category             object
Leather interior     object
Fuel type            object
Engine volume       float32
Mileage             float32
Cylinders             int32
Gear box type        object
Drive wheels         object
Doors                object
Wheel                object
Color                object
Airbags               int32
dtype: object

DROPPING THE DUPLICATES¶

In [11]:
car_data.duplicated().sum()
Out[11]:
np.int64(313)
In [12]:
car_data.drop_duplicates(inplace=True)
In [13]:
car_data.duplicated().sum()
Out[13]:
np.int64(0)

OUTLIER HANDLING¶

We tried several ways of handling the outliers; here is what we found with each method:

  1. Deleting outliers with the IQR method (bounds at Q1/Q3 ± 1.5 IQR): this removes about 4500 of the ~19000 rows, a large amount of data. After deleting them, the best model reached an R2 score of 0.76.
  2. Capping outliers at the same IQR bounds: this preserves the data, but was not as effective; the best R2 score fell to about 0.60.
  3. Deleting outliers with wider bounds at Q1/Q3 ± 3 IQR: this preserves the data (fewer than 1000 rows deleted) while keeping the best R2 score at 0.755. I found this the best trade-off so far.
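The capping approach from method 2 can be sketched on a hypothetical toy series (the values below are made up for illustration; the actual code in the next cell uses deletion with 3×IQR bounds instead):

```python
import pandas as pd

# Hypothetical toy series standing in for a column such as Price.
s = pd.Series([100, 120, 130, 140, 5000])

q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Capping (winsorizing): clip values into [lower, upper]
# instead of dropping the rows, so no data is lost.
capped = s.clip(lower=lower, upper=upper)
print(capped.tolist())
```

Capping keeps the row count intact at the cost of distorting the tails, which is likely why the fitted models scored worse with it here.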
In [14]:
cols=["Price", "Prod. year",'Mileage',"Levy"]
for col in cols:
    q1=car_data[col].quantile(0.25)
    q3=car_data[col].quantile(0.75)
    IQR=q3-q1
    lower_bound=q1-3*IQR
    upper_bound=q3+3*IQR
    print("Number of Outliers:",car_data[ (car_data[col]<lower_bound) | (car_data[col]>=upper_bound) ].shape[0])
    car_data.drop(car_data[(car_data[col]<lower_bound) | (car_data[col]>=upper_bound)].index, inplace=True)


car_data.drop(car_data[(car_data["Engine volume"]>=7.5)].index, inplace=True)
Number of Outliers: 301
Number of Outliers: 148
Number of Outliers: 203
Number of Outliers: 146

EXPLORATORY DATA ANALYSIS¶

UNIVARIATE ANALYSIS¶

In [15]:
num_col= ['Price',  'Prod. year','Cylinders','Airbags', 'Engine volume', 'Mileage','Levy']
cat_col=[  'Manufacturer', 'Model',
       'Category', 'Leather interior', 'Fuel type',
        'Gear box type', 'Drive wheels', 'Doors', 'Wheel', 'Color']

for col in num_col:
        fig , ax = plt.subplots(1,2,figsize=(12,8))
        sns.histplot(car_data,x=col,ax=ax[0],kde=True)
        sns.boxplot(data=car_data,x=col,ax=ax[1])
        plt.tight_layout()
        plt.show()

for col in cat_col:
        fig , ax = plt.subplots(1,2,figsize=(12,8))
        sns.countplot(data= car_data, x=col,ax=ax[0])
        ax[0].set_title(f"distribution of {col}")
        ax[0].set_xlabel(col)
        ax[0].set_ylabel("count")
        ax[0].tick_params(axis='x', rotation=90)
        counts= car_data[col].value_counts()
        ax[1].pie(counts,labels=counts.index,autopct='%1.1f%%')
        ax[1].set_title(f"distribution of {col}")
        plt.tight_layout()
        plt.show()
        
[Output: histogram and box plot for each numeric column; count plot and pie chart for each categorical column]

CORRELATION ANALYSIS¶

In [16]:
plt.figure(figsize=(10, 8))
sns.heatmap(car_data.corr(numeric_only=True), annot=True, cmap='coolwarm', fmt='.2f')
plt.show()
[Output: correlation heatmap of the numeric columns]

MULTIVARIATE ANALYSIS¶

In [17]:
sns.pairplot(car_data)
Out[17]:
<seaborn.axisgrid.PairGrid at 0x16db450a7b0>
[Output: pairplot of the numeric columns]
In [18]:
num_col= ['Prod. year','Cylinders','Airbags', 'Engine volume', 'Mileage','Levy']
cat_col=[  'Manufacturer', 'Model',
       'Category', 'Leather interior', 'Fuel type',
        'Gear box type', 'Drive wheels', 'Doors', 'Wheel', 'Color']

##  plot with mean value of price under that category
for col in cat_col:
        sns.barplot(data= car_data, x= col, y= 'Price', estimator= "mean")
        plt.xticks(rotation=90)
        plt.show()

## regression plot of num col
for col in num_col:
        sns.regplot(data= car_data, x= col, y= 'Price',scatter_kws={'alpha':0.3},line_kws={'color':'red'})
        plt.show()
[Output: mean-price bar plot for each categorical column; Price regression plot for each numeric column]

CONCLUSION: We observed that Price and Mileage have many outliers and widely scattered points, so we may apply a transformation to them.

Also, Engine volume has outliers around 20, which is nearly impossible; engine volume cannot realistically exceed 7.5.

Prod. year has outliers below 1980 that could be deleted, while the rest could be capped at 2000; this column needs more attention, so we will inspect it and act accordingly.

We will remove ID from the data, as it does not contribute to predicting the price.

We may bin Manufacturer and Model. Model needs special handling, as it has 1590 unique values; Gemini suggested trying a target encoder on it, which we will do in the feature engineering part.

Note: outlier detection relies more on domain knowledge than on mathematics. Do not apply mathematical tools blindly; follow domain intuition.

FEATURE ENGINEERING¶

In [19]:
car_data["Age"]=2026-car_data["Prod. year"]
In [20]:
car_data["Mileage_ratio"]=car_data["Mileage"]/car_data["Age"]

MISSING VALUES IMPUTATION¶

We will apply a KNN imputer to Levy, as it depends on Prod. year, Engine volume, Cylinders and, to a small extent, Mileage.
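Before wiring the imputer into a pipeline, its behaviour can be illustrated on a tiny hypothetical matrix (the values are made up):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Hypothetical 3x2 matrix; the NaN is filled from the row whose
# observed features are closest under the nan_euclidean metric.
X = np.array([[1.0, 2.0],
              [np.nan, 2.1],
              [8.0, 9.0]])

imputer = KNNImputer(n_neighbors=1)
X_filled = imputer.fit_transform(X)
# With n_neighbors=1 the missing entry is copied from the nearest
# row, [1.0, 2.0], so X_filled[1, 0] becomes 1.0.
```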

In [21]:
from sklearn.impute import KNNImputer
knn_imputer = KNNImputer(missing_values=np.nan,n_neighbors=5,weights='distance',metric='nan_euclidean')

TRANSFORMATIONS¶

In [22]:
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import TargetEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import PowerTransformer
In [23]:
pt = PowerTransformer(method='yeo-johnson')
pipe_pt_std = Pipeline(steps=[('pt', pt), ('scaler', StandardScaler())])
pipe_knn_std= Pipeline(steps=[('scaler', StandardScaler()),('knn_imputer', knn_imputer)])
ohe1=OneHotEncoder(handle_unknown="infrequent_if_exist",min_frequency=0.0025,sparse_output=False)
In [24]:
CT=ColumnTransformer(transformers=[
('ohe',OneHotEncoder(handle_unknown='ignore'),['Category','Fuel type','Color','Gear box type','Drive wheels','Wheel','Doors']),
('pt_std',pt,['Mileage']),('std',StandardScaler(),['Airbags']),
('ode',OrdinalEncoder(),['Leather interior']),('ohe1',ohe1,['Manufacturer']),

('knn_std',pipe_knn_std,['Levy','Prod. year','Cylinders','Engine volume'])
],remainder='passthrough')

#('target_encoder',TargetEncoder(),['Model']),

Feature selection¶

We tried target encoding the Model column, as suggested by Gemini. Applying target encoding to Model only increased the number of columns and contributed very little to predicting the price; the larger feature set also slowed down training. So we finally decided to drop the Model column.

In [25]:
car_data.drop(columns=['ID','Model'],inplace=True)

MODEL SELECTION¶

In [26]:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import root_mean_squared_error, mean_squared_error, r2_score, mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
from sklearn.compose import TransformedTargetRegressor
from xgboost import XGBRegressor
In [27]:
X=car_data.iloc[:,1:]
Y=car_data.iloc[:,0]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)

We apply a TransformedTargetRegressor to log-transform Price, fit the model on the transformed target, and then exponentiate the predictions to recover the actual price.

In [28]:
regressors=[('LinearRegression', LinearRegression()),
 ('SVR', SVR()), ('DecisionTreeRegressor', DecisionTreeRegressor()),
  ('RandomForestRegressor', RandomForestRegressor())]

pipelines_TTR={}
for name, regressor in regressors:
    wrapped_regressor=TransformedTargetRegressor(regressor=regressor, 
                                                 func=np.log1p, 
                                                 inverse_func=np.expm1)
    pipelines_TTR[name]=Pipeline([('CT', CT), ('regressor', wrapped_regressor)])
In [29]:
regressors=[('LinearRegression', LinearRegression()),
 ('SVR', SVR()), ('DecisionTreeRegressor', DecisionTreeRegressor()),
  ('RandomForestRegressor', RandomForestRegressor(n_estimators=200))]

pipelines={}
for name, regressor in regressors:
    pipelines[name]=Pipeline([('CT', CT), ('regressor', regressor)])

Linear Regression¶

Scores over Test data¶
In [30]:
pipelines['LinearRegression'].fit(X_train, Y_train)
Y_pred=pipelines['LinearRegression'].predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 11077.642696932806
MSE: 122714167.72090875
R2: 0.37180563484881546
MAE: 8229.317581875053
Scores over Train data¶
In [31]:
Y_pred=pipelines['LinearRegression'].predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)
RMSE: 11162.638552733857
MSE: 124604499.4589802
R2: 0.3653920887865415
MAE: 8239.450315355973

Linear Regression with Transformed Target Regressor¶

Scores over Test data¶
In [32]:
pipelines_TTR['LinearRegression'].fit(X_train, Y_train)
Y_pred=pipelines_TTR['LinearRegression'].predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 13460.718984370647
MSE: 181190955.57619637
R2: 0.0724531696438232
MAE: 9174.317390105742
Scores over Train data¶
In [33]:
Y_pred=pipelines_TTR['LinearRegression'].predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)
RMSE: 13385.623425222911
MSE: 179174914.48187634
R2: 0.08746619331651373
MAE: 9034.70497666385

Support Vector Regressor¶

Scores over Test data¶
In [34]:
pipelines['SVR'].fit(X_train, Y_train)
Y_pred=pipelines['SVR'].predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 14273.917063435198
MSE: 203744708.33382648
R2: -0.043003265896522036
MAE: 10441.421362960755
Scores over Train data¶
In [35]:
Y_pred=pipelines['SVR'].predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)

Support Vector Regressor with Transformed Target Regressor¶

Scores over Test data¶
In [36]:
pipelines_TTR['SVR'].fit(X_train, Y_train)
Y_pred=pipelines_TTR['SVR'].predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 14188.183273546249
MSE: 201304544.60373753
R2: -0.030511659313861905
MAE: 10250.307725984641
Scores over Train data¶
In [37]:
Y_pred=pipelines_TTR['SVR'].predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)
RMSE: 14038.818498337037
MSE: 197088424.82925022
R2: -0.003766911694994768
MAE: 10126.559052008595

Random Forest Regressor¶

Scores over Test data¶
In [38]:
pipelines['RandomForestRegressor'].fit(X_train, Y_train)
Y_pred=pipelines['RandomForestRegressor'].predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 6842.880539574347
MSE: 46825014.0788853
R2: 0.7602949150958656
MAE: 3941.2044810689185
Scores over Train data¶
In [39]:
Y_pred=pipelines['RandomForestRegressor'].predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)
RMSE: 2913.7699602483835
MSE: 8490055.381245866
R2: 0.9567603390329199
MAE: 1556.0165893482479

Random Forest Regressor with Transformed Target Regressor¶

Scores over Test data¶
In [40]:
pipelines_TTR['RandomForestRegressor'].fit(X_train, Y_train)
Y_pred=pipelines_TTR['RandomForestRegressor'].predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 7991.461446510009
MSE: 63863456.051055856
R2: 0.6730722786500392
MAE: 4451.181711426576
Scores over Train data¶
In [41]:
Y_pred=pipelines_TTR['RandomForestRegressor'].predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)
RMSE: 4366.916972725533
MSE: 19069963.846678335
R2: 0.9028771033453677
MAE: 1991.6951852487837

Decision Tree Regressor¶

Scores over Test data¶
In [42]:
pipelines['DecisionTreeRegressor'].fit(X_train, Y_train)
Y_pred=pipelines['DecisionTreeRegressor'].predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 9274.952744901708
MSE: 86024748.42015973
R2: 0.5596249135307878
MAE: 4958.568134227956
Scores over Train data¶
In [43]:
Y_pred=pipelines['DecisionTreeRegressor'].predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)

THE RANDOM FOREST REGRESSOR PERFORMS BEST SO FAR.
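`cross_val_score` was imported earlier but never used; before trusting the single-split Random Forest score, a k-fold check is worth running. A minimal sketch on synthetic data (`make_regression` stands in for the preprocessed car features; in the notebook you would pass `pipelines['RandomForestRegressor']`, X and Y instead):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the preprocessed car data.
X_demo, y_demo = make_regression(n_samples=200, n_features=5,
                                 noise=0.1, random_state=42)

# 5-fold cross-validated R2; a stable mean across folds suggests
# a single train/test split score is not just a lucky split.
scores = cross_val_score(RandomForestRegressor(n_estimators=50, random_state=42),
                         X_demo, y_demo, cv=5, scoring='r2')
print(scores.mean().round(3), '+/-', scores.std().round(3))
```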

Trying XGboost Regressor¶

In [44]:
pipelinexg=Pipeline([('CT', CT), ('regressor', XGBRegressor())])
pipelinexg.fit(X_train,Y_train)
Out[44]:
Pipeline(steps=[('CT',
                 ColumnTransformer(remainder='passthrough',
                                   transformers=[('ohe',
                                                  OneHotEncoder(handle_unknown='ignore'),
                                                  ['Category', 'Fuel type',
                                                   'Color', 'Gear box type',
                                                   'Drive wheels', 'Wheel',
                                                   'Doors']),
                                                 ('pt_std', PowerTransformer(),
                                                  ['Mileage']),
                                                 ('std', StandardScaler(),
                                                  ['Airbags']),
                                                 ('ode', OrdinalEncoder(),
                                                  ['Leather interior']),
                                                 ('ohe1',
                                                  OneHotEncoder(han...
                              feature_types=None, feature_weights=None,
                              gamma=None, grow_policy=None,
                              importance_type=None,
                              interaction_constraints=None, learning_rate=None,
                              max_bin=None, max_cat_threshold=None,
                              max_cat_to_onehot=None, max_delta_step=None,
                              max_depth=None, max_leaves=None,
                              min_child_weight=None, missing=nan,
                              monotone_constraints=None, multi_strategy=None,
                              n_estimators=None, n_jobs=None,
                              num_parallel_tree=None, ...))])
- 'if_binary' : drop the first category in each feature with two
categories. Features with 1 or more than 2 categories are
left intact.
- array : ``drop[i]`` is the category in feature ``X[:, i]`` that
should be dropped.

When `max_categories` or `min_frequency` is configured to group
infrequent categories, the dropping behavior is handled after the
grouping.

.. versionadded:: 0.21
The parameter `drop` was added in 0.21.

.. versionchanged:: 0.23
The option `drop='if_binary'` was added in 0.23.

.. versionchanged:: 1.1
Support for dropping infrequent categories.
None
sparse_output sparse_output: bool, default=True

When ``True``, it returns a :class:`scipy.sparse.csr_matrix`,
i.e. a sparse matrix in "Compressed Sparse Row" (CSR) format.

.. versionadded:: 1.2
`sparse` was renamed to `sparse_output`
True
dtype dtype: number type, default=np.float64

Desired dtype of output.
<class 'numpy.float64'>
handle_unknown handle_unknown: {'error', 'ignore', 'infrequent_if_exist', 'warn'}, default='error'

Specifies the way unknown categories are handled during :meth:`transform`.

- 'error' : Raise an error if an unknown category is present during transform.
- 'ignore' : When an unknown category is encountered during
transform, the resulting one-hot encoded columns for this feature
will be all zeros. In the inverse transform, an unknown category
will be denoted as None.
- 'infrequent_if_exist' : When an unknown category is encountered
during transform, the resulting one-hot encoded columns for this
feature will map to the infrequent category if it exists. The
infrequent category will be mapped to the last position in the
encoding. During inverse transform, an unknown category will be
mapped to the category denoted `'infrequent'` if it exists. If the
`'infrequent'` category does not exist, then :meth:`transform` and
:meth:`inverse_transform` will handle an unknown category as with
`handle_unknown='ignore'`. Infrequent categories exist based on
`min_frequency` and `max_categories`. Read more in the
:ref:`User Guide `.
- 'warn' : When an unknown category is encountered during transform
a warning is issued, and the encoding then proceeds as described for
`handle_unknown="infrequent_if_exist"`.

.. versionchanged:: 1.1
`'infrequent_if_exist'` was added to automatically handle unknown
categories and infrequent categories.

.. versionadded:: 1.6
The option `"warn"` was added in 1.6.
'ignore'
min_frequency min_frequency: int or float, default=None

Specifies the minimum frequency below which a category will be
considered infrequent.

- If `int`, categories with a smaller cardinality will be considered
infrequent.

- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
None
max_categories max_categories: int, default=None

Specifies an upper limit to the number of output features for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
None
feature_name_combiner feature_name_combiner: "concat" or callable, default="concat"

Callable with signature `def callable(input_feature, category)` that returns a
string. This is used to create feature names to be returned by
:meth:`get_feature_names_out`.

`"concat"` concatenates encoded feature name and category with
`feature + "_" + str(category)`.E.g. feature X with values 1, 6, 7 create
feature names `X_1, X_6, X_7`.

.. versionadded:: 1.3
'concat'
['Mileage']
Parameters
method method: {'yeo-johnson', 'box-cox'}, default='yeo-johnson'

The power transform method. Available methods are:

- 'yeo-johnson' [1]_, works with positive and negative values
- 'box-cox' [2]_, only works with strictly positive values
'yeo-johnson'
standardize standardize: bool, default=True

Set to True to apply zero-mean, unit-variance normalization to the
transformed output.
True
copy copy: bool, default=True

Set to False to perform inplace computation during transformation.
True
['Airbags']
Parameters
copy copy: bool, default=True

If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
True
with_mean with_mean: bool, default=True

If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
True
with_std with_std: bool, default=True

If True, scale the data to unit variance (or equivalently,
unit standard deviation).
True
['Leather interior']
Parameters
categories categories: 'auto' or a list of array-like, default='auto'

Categories (unique values) per feature:

- 'auto' : Determine categories automatically from the training data.
- list : ``categories[i]`` holds the categories expected in the ith
column. The passed categories should not mix strings and numeric
values, and should be sorted in case of numeric values.

The used categories can be found in the ``categories_`` attribute.
'auto'
dtype dtype: number type, default=np.float64

Desired dtype of output.
<class 'numpy.float64'>
handle_unknown handle_unknown: {'error', 'use_encoded_value'}, default='error'

When set to 'error' an error will be raised in case an unknown
categorical feature is present during transform. When set to
'use_encoded_value', the encoded value of unknown categories will be
set to the value given for the parameter `unknown_value`. In
:meth:`inverse_transform`, an unknown category will be denoted as None.

.. versionadded:: 0.24
'error'
unknown_value unknown_value: int or np.nan, default=None

When the parameter handle_unknown is set to 'use_encoded_value', this
parameter is required and will set the encoded value of unknown
categories. It has to be distinct from the values used to encode any of
the categories in `fit`. If set to np.nan, the `dtype` parameter must
be a float dtype.

.. versionadded:: 0.24
None
encoded_missing_value encoded_missing_value: int or np.nan, default=np.nan

Encoded value of missing categories. If set to `np.nan`, then the `dtype`
parameter must be a float dtype.

.. versionadded:: 1.1
nan
min_frequency min_frequency: int or float, default=None

Specifies the minimum frequency below which a category will be
considered infrequent.

- If `int`, categories with a smaller cardinality will be considered
infrequent.

- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent.

.. versionadded:: 1.3
Read more in the :ref:`User Guide `.
None
max_categories max_categories: int, default=None

Specifies an upper limit to the number of output categories for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features.

`max_categories` do **not** take into account missing or unknown
categories. Setting `unknown_value` or `encoded_missing_value` to an
integer will increase the number of unique integer codes by one each.
This can result in up to `max_categories + 2` integer codes.

.. versionadded:: 1.3
Read more in the :ref:`User Guide `.
None
['Manufacturer']
Parameters
categories categories: 'auto' or a list of array-like, default='auto'

Categories (unique values) per feature:

- 'auto' : Determine categories automatically from the training data.
- list : ``categories[i]`` holds the categories expected in the ith
column. The passed categories should not mix strings and numeric
values within a single feature, and should be sorted in case of
numeric values.

The used categories can be found in the ``categories_`` attribute.

.. versionadded:: 0.20
'auto'
drop drop: {'first', 'if_binary'} or an array-like of shape (n_features,), default=None

Specifies a methodology to use to drop one of the categories per
feature. This is useful in situations where perfectly collinear
features cause problems, such as when feeding the resulting data
into an unregularized linear regression model.

However, dropping one category breaks the symmetry of the original
representation and can therefore induce a bias in downstream models,
for instance for penalized linear classification or regression models.

- None : retain all features (the default).
- 'first' : drop the first category in each feature. If only one
category is present, the feature will be dropped entirely.
- 'if_binary' : drop the first category in each feature with two
categories. Features with 1 or more than 2 categories are
left intact.
- array : ``drop[i]`` is the category in feature ``X[:, i]`` that
should be dropped.

When `max_categories` or `min_frequency` is configured to group
infrequent categories, the dropping behavior is handled after the
grouping.

.. versionadded:: 0.21
The parameter `drop` was added in 0.21.

.. versionchanged:: 0.23
The option `drop='if_binary'` was added in 0.23.

.. versionchanged:: 1.1
Support for dropping infrequent categories.
None
sparse_output sparse_output: bool, default=True

When ``True``, it returns a :class:`scipy.sparse.csr_matrix`,
i.e. a sparse matrix in "Compressed Sparse Row" (CSR) format.

.. versionadded:: 1.2
`sparse` was renamed to `sparse_output`
False
dtype dtype: number type, default=np.float64

Desired dtype of output.
<class 'numpy.float64'>
handle_unknown handle_unknown: {'error', 'ignore', 'infrequent_if_exist', 'warn'}, default='error'

Specifies the way unknown categories are handled during :meth:`transform`.

- 'error' : Raise an error if an unknown category is present during transform.
- 'ignore' : When an unknown category is encountered during
transform, the resulting one-hot encoded columns for this feature
will be all zeros. In the inverse transform, an unknown category
will be denoted as None.
- 'infrequent_if_exist' : When an unknown category is encountered
during transform, the resulting one-hot encoded columns for this
feature will map to the infrequent category if it exists. The
infrequent category will be mapped to the last position in the
encoding. During inverse transform, an unknown category will be
mapped to the category denoted `'infrequent'` if it exists. If the
`'infrequent'` category does not exist, then :meth:`transform` and
:meth:`inverse_transform` will handle an unknown category as with
`handle_unknown='ignore'`. Infrequent categories exist based on
`min_frequency` and `max_categories`. Read more in the
:ref:`User Guide `.
- 'warn' : When an unknown category is encountered during transform
a warning is issued, and the encoding then proceeds as described for
`handle_unknown="infrequent_if_exist"`.

.. versionchanged:: 1.1
`'infrequent_if_exist'` was added to automatically handle unknown
categories and infrequent categories.

.. versionadded:: 1.6
The option `"warn"` was added in 1.6.
'infrequent_if_exist'
min_frequency min_frequency: int or float, default=None

Specifies the minimum frequency below which a category will be
considered infrequent.

- If `int`, categories with a smaller cardinality will be considered
infrequent.

- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
0.0025
max_categories max_categories: int, default=None

Specifies an upper limit to the number of output features for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
None
feature_name_combiner feature_name_combiner: "concat" or callable, default="concat"

Callable with signature `def callable(input_feature, category)` that returns a
string. This is used to create feature names to be returned by
:meth:`get_feature_names_out`.

`"concat"` concatenates encoded feature name and category with
`feature + "_" + str(category)`.E.g. feature X with values 1, 6, 7 create
feature names `X_1, X_6, X_7`.

.. versionadded:: 1.3
'concat'
['Levy', 'Prod. year', 'Cylinders', 'Engine volume']
Parameters
copy copy: bool, default=True

If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
True
with_mean with_mean: bool, default=True

If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
True
with_std with_std: bool, default=True

If True, scale the data to unit variance (or equivalently,
unit standard deviation).
True
Parameters
missing_values missing_values: int, float, str, np.nan or None, default=np.nan

The placeholder for the missing values. All occurrences of
`missing_values` will be imputed. For pandas' dataframes with
nullable integer dtypes with missing values, `missing_values`
should be set to np.nan, since `pd.NA` will be converted to np.nan.
nan
n_neighbors n_neighbors: int, default=5

Number of neighboring samples to use for imputation.
5
weights weights: {'uniform', 'distance'} or callable, default='uniform'

Weight function used in prediction. Possible values:

- 'uniform' : uniform weights. All points in each neighborhood are
weighted equally.
- 'distance' : weight points by the inverse of their distance.
in this case, closer neighbors of a query point will have a
greater influence than neighbors which are further away.
- callable : a user-defined function which accepts an
array of distances, and returns an array of the same shape
containing the weights.
'distance'
metric metric: {'nan_euclidean'} or callable, default='nan_euclidean'

Distance metric for searching neighbors. Possible values:

- 'nan_euclidean'
- callable : a user-defined function which conforms to the definition
of ``func_metric(x, y, *, missing_values=np.nan)``. `x` and `y`
corresponds to a row (i.e. 1-D arrays) of `X` and `Y`, respectively.
The callable should returns a scalar distance value.
'nan_euclidean'
copy copy: bool, default=True

If True, a copy of X will be created. If False, imputation will
be done in-place whenever possible.
True
add_indicator add_indicator: bool, default=False

If True, a :class:`MissingIndicator` transform will stack onto the
output of the imputer's transform. This allows a predictive estimator
to account for missingness despite imputation. If a feature has no
missing values at fit/train time, the feature won't appear on the
missing indicator even if there are missing values at transform/test
time.
False
keep_empty_features keep_empty_features: bool, default=False

If True, features that consist exclusively of missing values when
`fit` is called are returned in results when `transform` is called.
The imputed value is always `0`.

.. versionadded:: 1.2
False
['Age', 'Mileage_ratio']
passthrough
Final estimator: XGBRegressor(objective='reg:squarederror'), with all other hyperparameters (booster, learning_rate, max_depth, n_estimators, subsample, reg_alpha, reg_lambda, ...) left at their xgboost defaults.
Scores over Test data¶
In [45]:
Y_pred=pipelinexg.predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 7277.00390625
MSE: 52954788.0
R2: 0.7289155721664429
MAE: 4598.037109375
Scores over Train data¶
In [46]:
Y_pred=pipelinexg.predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)
RMSE: 4908.55859375
MSE: 24093948.0
R2: 0.8772900700569153
MAE: 3282.20361328125
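The train-set R2 (0.877) is noticeably higher than the test-set R2 (0.729), so the model overfits somewhat. As a quick sanity check on the metrics themselves, RMSE is simply the square root of MSE (7277.0**2 ≈ 5.295e7, matching the reported MSE up to float32 rounding). A toy illustration with made-up prices:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Made-up true and predicted prices, just to show how the metrics relate.
y_true = np.array([10_000.0, 15_000.0, 7_500.0, 22_000.0])
y_pred = np.array([11_000.0, 14_000.0, 9_000.0, 20_000.0])

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)        # RMSE is, by definition, sqrt(MSE)
r2 = r2_score(y_true, y_pred)
print(rmse, mse, r2)
```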

Cross-validation score of the Random Forest regressor¶

In [47]:
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(pipelines['RandomForestRegressor'], X_train, Y_train, cv=10, scoring='r2')
print(cv_scores)
print(cv_scores.mean())
print(cv_scores.std())
[0.73510116 0.70363619 0.75136473 0.78741029 0.7424472  0.77696562
 0.79375917 0.75387093 0.74511952 0.76423827]
0.7553913078283899
0.025343430550887548

Cross-validation score of XGBoost¶

In [48]:
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(pipelinexg, X_train, Y_train, cv=10, scoring='r2')
print(cv_scores)
print(cv_scores.mean())
print(cv_scores.std())
[0.71987677 0.67856419 0.73137218 0.75571334 0.73942471 0.75173986
 0.768534   0.73092258 0.7376138  0.74939322]
0.7363154649734497
0.023438354397858685

CONCLUSION:

We found two models that performed well on our car_data dataset:

  1. Random Forest Regressor, with a mean cross-validation R2 score of 0.755
  2. XGBoost Regressor, with a mean cross-validation R2 score of 0.736

HYPERPARAMETER TUNING OF THE BEST-PERFORMING MODEL SO FAR¶

Random Forest Regressor¶

In [119]:
pipelines['RandomForestRegressor'][1]
Out[119]:
RandomForestRegressor(n_estimators=200)
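One common way to tune this forest is a randomized search over its main hyperparameters. The sketch below runs on synthetic data; the parameter grid and n_iter are illustrative, not the notebook's actual search.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Hypothetical search space; the keys match RandomForestRegressor's
# constructor parameters.
param_dist = {
    "n_estimators": [100, 200, 400],
    "max_depth": [None, 10, 20, 30],
    "min_samples_leaf": [1, 2, 4],
    "max_features": ["sqrt", "log2", 1.0],
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=42),
    param_distributions=param_dist,
    n_iter=5,          # small for illustration; use more in practice
    cv=3,
    scoring="r2",
    random_state=42,
)

# Toy data stands in for the preprocessed training matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([3.0, -2.0, 1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
search.fit(X, y)
print(search.best_params_)
```

In the notebook itself, X would be the training data and the estimator would be the pipeline's RandomForestRegressor step.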
All other hyperparameters of this forest (criterion='squared_error', max_depth=None, min_samples_split=2, max_features=1.0, bootstrap=True, ...) are at their scikit-learn defaults.
By default, :func:`~sklearn.metrics.r2_score` is used.
Provide a callable with signature `metric(y_true, y_pred)` to use a
custom metric. Only available if `bootstrap=True`.

For an illustration of out-of-bag (OOB) error estimation, see the example
:ref:`sphx_glr_auto_examples_ensemble_plot_ensemble_oob.py`.
False
n_jobs n_jobs: int, default=None

The number of jobs to run in parallel. :meth:`fit`, :meth:`predict`,
:meth:`decision_path` and :meth:`apply` are all parallelized over the
trees. ``None`` means 1 unless in a :obj:`joblib.parallel_backend`
context. ``-1`` means using all processors. See :term:`Glossary
` for more details.
None
random_state random_state: int, RandomState instance or None, default=None

Controls both the randomness of the bootstrapping of the samples used
when building trees (if ``bootstrap=True``) and the sampling of the
features to consider when looking for the best split at each node
(if ``max_features < n_features``).
See :term:`Glossary ` for details.
None
verbose verbose: int, default=0

Controls the verbosity when fitting and predicting.
0
warm_start warm_start: bool, default=False

When set to ``True``, reuse the solution of the previous call to fit
and add more estimators to the ensemble, otherwise, just fit a whole
new forest. See :term:`Glossary ` and
:ref:`tree_ensemble_warm_start` for details.
False
ccp_alpha ccp_alpha: non-negative float, default=0.0

Complexity parameter used for Minimal Cost-Complexity Pruning. The
subtree with the largest cost complexity that is smaller than
``ccp_alpha`` will be chosen. By default, no pruning is performed. See
:ref:`minimal_cost_complexity_pruning` for details. See
:ref:`sphx_glr_auto_examples_tree_plot_cost_complexity_pruning.py`
for an example of such pruning.

.. versionadded:: 0.22
0.0
max_samples max_samples: int or float, default=None

If bootstrap is True, the number of samples to draw from X
to train each base estimator.

- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max(round(n_samples * max_samples), 1)` samples. Thus,
`max_samples` should be in the interval `(0.0, 1.0]`.

.. versionadded:: 0.22
None
monotonic_cst monotonic_cst: array-like of int of shape (n_features), default=None

Indicates the monotonicity constraint to enforce on each feature.
- 1: monotonically increasing
- 0: no constraint
- -1: monotonically decreasing

If monotonic_cst is None, no constraints are applied.

Monotonicity constraints are not supported for:
- multioutput regressions (i.e. when `n_outputs_ > 1`),
- regressions trained on data with missing values.

Read more in the :ref:`User Guide `.

.. versionadded:: 1.4
None
In [133]:
params = {"regressor__n_estimators": [450, 500],
          "regressor__max_depth": [24, 36, 42],
          "regressor__min_samples_leaf": [2]}
In [134]:
gridsearch = GridSearchCV(pipelines['RandomForestRegressor'], param_grid=params,
                          cv=5, scoring='r2', verbose=2).fit(X_train, Y_train)
gridsearch
Fitting 5 folds for each of 6 candidates, totalling 30 fits
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  35.4s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  35.3s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  36.0s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  32.8s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  32.2s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  36.3s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  39.2s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  36.3s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.7s
[CV] END regressor__max_depth=24, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.1s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.2s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.6s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  31.2s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.5s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.2s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.4s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.8s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.7s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.3s
[CV] END regressor__max_depth=36, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.4s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.3s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.8s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.5s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.3s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=450; total time=  30.0s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.4s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.8s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.6s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.3s
[CV] END regressor__max_depth=42, regressor__min_samples_leaf=2, regressor__n_estimators=500; total time=  33.8s
Out[134]:
GridSearchCV(cv=5,
             estimator=Pipeline(steps=[('CT',
                                        ColumnTransformer(remainder='passthrough',
                                                          transformers=[('ohe',
                                                                         OneHotEncoder(handle_unknown='ignore'),
                                                                         ['Category',
                                                                          'Fuel '
                                                                          'type',
                                                                          'Color',
                                                                          'Gear '
                                                                          'box '
                                                                          'type',
                                                                          'Drive '
                                                                          'wheels',
                                                                          'Wheel',
                                                                          'Doors']),
                                                                        ('pt_std',
                                                                         PowerTransformer(),
                                                                         ['Mileage']),
                                                                        ('std',
                                                                         StandardScaler(),
                                                                         ['Airbags']),
                                                                        ('ode',
                                                                         OrdinalEncoder(),
                                                                         ['Leather '...
                                                                        ('knn_std',
                                                                         Pipeline(steps=[('scaler',
                                                                                          StandardScaler()),
                                                                                         ('knn_imputer',
                                                                                          KNNImputer(weights='distance'))]),
                                                                         ['Levy',
                                                                          'Prod. '
                                                                          'year',
                                                                          'Cylinders',
                                                                          'Engine '
                                                                          'volume'])])),
                                       ('regressor',
                                        RandomForestRegressor(n_estimators=200))]),
             param_grid={'regressor__max_depth': [24, 36, 42],
                         'regressor__min_samples_leaf': [2],
                         'regressor__n_estimators': [450, 500]},
             scoring='r2', verbose=2)
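Once the search finishes, the winning hyperparameter combination, its mean cross-validated R2, and the refitted model are all available as attributes of the fitted `GridSearchCV` object. A self-contained sketch of that API on synthetic data (small grid so it runs quickly; this is not the notebook's car-price pipeline):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the car data, just to illustrate the attributes.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [8, 16]},
    cv=3, scoring="r2",
).fit(X, y)

print(grid.best_params_)  # best hyperparameter combination found
print(grid.best_score_)   # mean cross-validated R2 for that combination
# grid.best_estimator_ is the model refitted on all of X with best_params_;
# with refit=True (the default), grid.predict(...) delegates to it.
```

The same attributes apply to the fitted `gridsearch` above, so `gridsearch.best_params_` and `gridsearch.best_score_` report which of the six candidates won and how close it comes to the 0.75 R2 target.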
transform, the resulting one-hot encoded columns for this feature
will be all zeros. In the inverse transform, an unknown category
will be denoted as None.
- 'infrequent_if_exist' : When an unknown category is encountered
during transform, the resulting one-hot encoded columns for this
feature will map to the infrequent category if it exists. The
infrequent category will be mapped to the last position in the
encoding. During inverse transform, an unknown category will be
mapped to the category denoted `'infrequent'` if it exists. If the
`'infrequent'` category does not exist, then :meth:`transform` and
:meth:`inverse_transform` will handle an unknown category as with
`handle_unknown='ignore'`. Infrequent categories exist based on
`min_frequency` and `max_categories`. Read more in the
:ref:`User Guide `.
- 'warn' : When an unknown category is encountered during transform
a warning is issued, and the encoding then proceeds as described for
`handle_unknown="infrequent_if_exist"`.

.. versionchanged:: 1.1
`'infrequent_if_exist'` was added to automatically handle unknown
categories and infrequent categories.

.. versionadded:: 1.6
The option `"warn"` was added in 1.6.
'infrequent_if_exist'
min_frequency min_frequency: int or float, default=None

Specifies the minimum frequency below which a category will be
considered infrequent.

- If `int`, categories with a smaller cardinality will be considered
infrequent.

- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
0.0025
max_categories max_categories: int, default=None

Specifies an upper limit to the number of output features for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
None
feature_name_combiner feature_name_combiner: "concat" or callable, default="concat"

Callable with signature `def callable(input_feature, category)` that returns a
string. This is used to create feature names to be returned by
:meth:`get_feature_names_out`.

`"concat"` concatenates encoded feature name and category with
`feature + "_" + str(category)`.E.g. feature X with values 1, 6, 7 create
feature names `X_1, X_6, X_7`.

.. versionadded:: 1.3
'concat'
['Levy', 'Prod. year', 'Cylinders', 'Engine volume']
Parameters
copy copy: bool, default=True

If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
True
with_mean with_mean: bool, default=True

If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
True
with_std with_std: bool, default=True

If True, scale the data to unit variance (or equivalently,
unit standard deviation).
True
Parameters
missing_values missing_values: int, float, str, np.nan or None, default=np.nan

The placeholder for the missing values. All occurrences of
`missing_values` will be imputed. For pandas' dataframes with
nullable integer dtypes with missing values, `missing_values`
should be set to np.nan, since `pd.NA` will be converted to np.nan.
nan
n_neighbors n_neighbors: int, default=5

Number of neighboring samples to use for imputation.
5
weights weights: {'uniform', 'distance'} or callable, default='uniform'

Weight function used in prediction. Possible values:

- 'uniform' : uniform weights. All points in each neighborhood are
weighted equally.
- 'distance' : weight points by the inverse of their distance.
in this case, closer neighbors of a query point will have a
greater influence than neighbors which are further away.
- callable : a user-defined function which accepts an
array of distances, and returns an array of the same shape
containing the weights.
'distance'
metric metric: {'nan_euclidean'} or callable, default='nan_euclidean'

Distance metric for searching neighbors. Possible values:

- 'nan_euclidean'
- callable : a user-defined function which conforms to the definition
of ``func_metric(x, y, *, missing_values=np.nan)``. `x` and `y`
corresponds to a row (i.e. 1-D arrays) of `X` and `Y`, respectively.
The callable should returns a scalar distance value.
'nan_euclidean'
copy copy: bool, default=True

If True, a copy of X will be created. If False, imputation will
be done in-place whenever possible.
True
add_indicator add_indicator: bool, default=False

If True, a :class:`MissingIndicator` transform will stack onto the
output of the imputer's transform. This allows a predictive estimator
to account for missingness despite imputation. If a feature has no
missing values at fit/train time, the feature won't appear on the
missing indicator even if there are missing values at transform/test
time.
False
keep_empty_features keep_empty_features: bool, default=False

If True, features that consist exclusively of missing values when
`fit` is called are returned in results when `transform` is called.
The imputed value is always `0`.

.. versionadded:: 1.2
False
['Age', 'Mileage_ratio']
passthrough
Parameters
n_estimators n_estimators: int, default=100

The number of trees in the forest.

.. versionchanged:: 0.22
The default value of ``n_estimators`` changed from 10 to 100
in 0.22.
500
criterion criterion: {"squared_error", "absolute_error", "friedman_mse", "poisson"}, default="squared_error"

The function to measure the quality of a split. Supported criteria
are "squared_error" for the mean squared error, which is equal to
variance reduction as feature selection criterion and minimizes the L2
loss using the mean of each terminal node, "friedman_mse", which uses
mean squared error with Friedman's improvement score for potential
splits, "absolute_error" for the mean absolute error, which minimizes
the L1 loss using the median of each terminal node, and "poisson" which
uses reduction in Poisson deviance to find splits.
Training using "absolute_error" is significantly slower
than when using "squared_error".

.. versionadded:: 0.18
Mean Absolute Error (MAE) criterion.

.. versionadded:: 1.0
Poisson criterion.
'squared_error'
max_depth max_depth: int, default=None

The maximum depth of the tree. If None, then nodes are expanded until
all leaves are pure or until all leaves contain less than
min_samples_split samples.
42
min_samples_split min_samples_split: int or float, default=2

The minimum number of samples required to split an internal node:

- If int, then consider `min_samples_split` as the minimum number.
- If float, then `min_samples_split` is a fraction and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split.

.. versionchanged:: 0.18
Added float values for fractions.
2
min_samples_leaf min_samples_leaf: int or float, default=1

The minimum number of samples required to be at a leaf node.
A split point at any depth will only be considered if it leaves at
least ``min_samples_leaf`` training samples in each of the left and
right branches. This may have the effect of smoothing the model,
especially in regression.

- If int, then consider `min_samples_leaf` as the minimum number.
- If float, then `min_samples_leaf` is a fraction and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node.

.. versionchanged:: 0.18
Added float values for fractions.
2
min_weight_fraction_leaf min_weight_fraction_leaf: float, default=0.0

The minimum weighted fraction of the sum total of weights (of all
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided.
0.0
max_features max_features: {"sqrt", "log2", None}, int or float, default=1.0

The number of features to consider when looking for the best split:

- If int, then consider `max_features` features at each split.
- If float, then `max_features` is a fraction and
`max(1, int(max_features * n_features_in_))` features are considered at each
split.
- If "sqrt", then `max_features=sqrt(n_features)`.
- If "log2", then `max_features=log2(n_features)`.
- If None or 1.0, then `max_features=n_features`.

.. note::
The default of 1.0 is equivalent to bagged trees and more
randomness can be achieved by setting smaller values, e.g. 0.3.

.. versionchanged:: 1.1
The default of `max_features` changed from `"auto"` to 1.0.

Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features.
1.0
max_leaf_nodes max_leaf_nodes: int, default=None

Grow trees with ``max_leaf_nodes`` in best-first fashion.
Best nodes are defined as relative reduction in impurity.
If None then unlimited number of leaf nodes.
None
min_impurity_decrease min_impurity_decrease: float, default=0.0

A node will be split if this split induces a decrease of the impurity
greater than or equal to this value.

The weighted impurity decrease equation is the following::

N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)

where ``N`` is the total number of samples, ``N_t`` is the number of
samples at the current node, ``N_t_L`` is the number of samples in the
left child, and ``N_t_R`` is the number of samples in the right child.

``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
if ``sample_weight`` is passed.

.. versionadded:: 0.19
0.0
bootstrap bootstrap: bool, default=True

Whether bootstrap samples are used when building trees. If False, the
whole dataset is used to build each tree.
True
oob_score oob_score: bool or callable, default=False

Whether to use out-of-bag samples to estimate the generalization score.
By default, :func:`~sklearn.metrics.r2_score` is used.
Provide a callable with signature `metric(y_true, y_pred)` to use a
custom metric. Only available if `bootstrap=True`.

For an illustration of out-of-bag (OOB) error estimation, see the example
:ref:`sphx_glr_auto_examples_ensemble_plot_ensemble_oob.py`.
False
n_jobs n_jobs: int, default=None

The number of jobs to run in parallel. :meth:`fit`, :meth:`predict`,
:meth:`decision_path` and :meth:`apply` are all parallelized over the
trees. ``None`` means 1 unless in a :obj:`joblib.parallel_backend`
context. ``-1`` means using all processors. See :term:`Glossary
` for more details.
None
random_state random_state: int, RandomState instance or None, default=None

Controls both the randomness of the bootstrapping of the samples used
when building trees (if ``bootstrap=True``) and the sampling of the
features to consider when looking for the best split at each node
(if ``max_features < n_features``).
See :term:`Glossary ` for details.
None
verbose verbose: int, default=0

Controls the verbosity when fitting and predicting.
0
warm_start warm_start: bool, default=False

When set to ``True``, reuse the solution of the previous call to fit
and add more estimators to the ensemble, otherwise, just fit a whole
new forest. See :term:`Glossary ` and
:ref:`tree_ensemble_warm_start` for details.
False
ccp_alpha ccp_alpha: non-negative float, default=0.0

Complexity parameter used for Minimal Cost-Complexity Pruning. The
subtree with the largest cost complexity that is smaller than
``ccp_alpha`` will be chosen. By default, no pruning is performed. See
:ref:`minimal_cost_complexity_pruning` for details. See
:ref:`sphx_glr_auto_examples_tree_plot_cost_complexity_pruning.py`
for an example of such pruning.

.. versionadded:: 0.22
0.0
max_samples max_samples: int or float, default=None

If bootstrap is True, the number of samples to draw from X
to train each base estimator.

- If None (default), then draw `X.shape[0]` samples.
- If int, then draw `max_samples` samples.
- If float, then draw `max(round(n_samples * max_samples), 1)` samples. Thus,
`max_samples` should be in the interval `(0.0, 1.0]`.

.. versionadded:: 0.22
None
monotonic_cst monotonic_cst: array-like of int of shape (n_features), default=None

Indicates the monotonicity constraint to enforce on each feature.
- 1: monotonically increasing
- 0: no constraint
- -1: monotonically decreasing

If monotonic_cst is None, no constraints are applied.

Monotonicity constraints are not supported for:
- multioutput regressions (i.e. when `n_outputs_ > 1`),
- regressions trained on data with missing values.

Read more in the :ref:`User Guide `.

.. versionadded:: 1.4
None
In [135]:
gridsearch.best_params_
Out[135]:
{'regressor__max_depth': 42,
 'regressor__min_samples_leaf': 2,
 'regressor__n_estimators': 500}
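The `regressor__` prefixes in `best_params_` come from the pipeline step name: `GridSearchCV` routes step-prefixed keys to the corresponding step's parameters. The actual grid and pipeline are defined earlier in the notebook and not shown in this excerpt; the sketch below reconstructs the shape on invented toy data (the column names, grid values, and variable names here are illustrative only).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for the preprocessed car features (invented).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([3.0, -2.0, 1.0, 0.5]) + rng.normal(scale=0.1, size=200)

pipe = Pipeline([("scaler", StandardScaler()),
                 ("regressor", RandomForestRegressor(random_state=0))])

# Step-prefixed keys ("regressor__...") address the pipeline's final step;
# this is where the "regressor__" names in best_params_ come from.
param_grid = {"regressor__n_estimators": [50, 100],
              "regressor__max_depth": [None, 10]}
gs = GridSearchCV(pipe, param_grid, cv=3, scoring="r2", n_jobs=-1)
gs.fit(X, y)
print(gs.best_params_)
```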
In [136]:
Y_pred = gridsearch.predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 6917.5612686303575
MSE: 47852653.90525484
R2: 0.7550342548125436
MAE: 4037.022539785671
In [137]:
Y_pred = gridsearch.predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)
RMSE: 3803.92341136482
MSE: 14469833.319529368
R2: 0.9263054645828711
MAE: 2103.801159563213
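The train-set R² (0.926) sits well above the test R² (0.755), the usual sign of a random forest partly fitting training noise; the test-set numbers are the ones that count against the target, and both are met (R² ≥ 0.75, MAE < 5000). Since the same four metrics are printed for every split, a small helper avoids repeating the block. This helper is not part of the notebook — a sketch only:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_report(y_true, y_pred):
    """Bundle the four metrics printed above into one dict."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "RMSE": float(np.sqrt(mse)),  # same value as root_mean_squared_error
        "MSE": float(mse),
        "R2": float(r2_score(y_true, y_pred)),
        "MAE": float(mean_absolute_error(y_true, y_pred)),
    }

# Example with made-up targets and predictions:
report = regression_report([100.0, 200.0, 300.0], [110.0, 190.0, 310.0])
print(report)  # RMSE 10.0, MSE 100.0, MAE 10.0
```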

XGBoost Regressor¶

In [125]:
pipelinexg
Out[125]:
Pipeline(steps=[('CT',
                 ColumnTransformer(remainder='passthrough',
                                   transformers=[('ohe',
                                                  OneHotEncoder(handle_unknown='ignore'),
                                                  ['Category', 'Fuel type',
                                                   'Color', 'Gear box type',
                                                   'Drive wheels', 'Wheel',
                                                   'Doors']),
                                                 ('pt_std', PowerTransformer(),
                                                  ['Mileage']),
                                                 ('std', StandardScaler(),
                                                  ['Airbags']),
                                                 ('ode', OrdinalEncoder(),
                                                  ['Leather interior']),
                                                 ('ohe1',
                                                  OneHotEncoder(han...
                              feature_types=None, feature_weights=None,
                              gamma=None, grow_policy=None,
                              importance_type=None,
                              interaction_constraints=None, learning_rate=None,
                              max_bin=None, max_cat_threshold=None,
                              max_cat_to_onehot=None, max_delta_step=None,
                              max_depth=None, max_leaves=None,
                              min_child_weight=None, missing=nan,
                              monotone_constraints=None, multi_strategy=None,
                              n_estimators=None, n_jobs=None,
                              num_parallel_tree=None, ...))])
feature. This is useful in situations where perfectly collinear
features cause problems, such as when feeding the resulting data
into an unregularized linear regression model.

However, dropping one category breaks the symmetry of the original
representation and can therefore induce a bias in downstream models,
for instance for penalized linear classification or regression models.

- None : retain all features (the default).
- 'first' : drop the first category in each feature. If only one
category is present, the feature will be dropped entirely.
- 'if_binary' : drop the first category in each feature with two
categories. Features with 1 or more than 2 categories are
left intact.
- array : ``drop[i]`` is the category in feature ``X[:, i]`` that
should be dropped.

When `max_categories` or `min_frequency` is configured to group
infrequent categories, the dropping behavior is handled after the
grouping.

.. versionadded:: 0.21
The parameter `drop` was added in 0.21.

.. versionchanged:: 0.23
The option `drop='if_binary'` was added in 0.23.

.. versionchanged:: 1.1
Support for dropping infrequent categories.
None
sparse_output sparse_output: bool, default=True

When ``True``, it returns a :class:`scipy.sparse.csr_matrix`,
i.e. a sparse matrix in "Compressed Sparse Row" (CSR) format.

.. versionadded:: 1.2
`sparse` was renamed to `sparse_output`
False
dtype dtype: number type, default=np.float64

Desired dtype of output.
<class 'numpy.float64'>
handle_unknown handle_unknown: {'error', 'ignore', 'infrequent_if_exist', 'warn'}, default='error'

Specifies the way unknown categories are handled during :meth:`transform`.

- 'error' : Raise an error if an unknown category is present during transform.
- 'ignore' : When an unknown category is encountered during
transform, the resulting one-hot encoded columns for this feature
will be all zeros. In the inverse transform, an unknown category
will be denoted as None.
- 'infrequent_if_exist' : When an unknown category is encountered
during transform, the resulting one-hot encoded columns for this
feature will map to the infrequent category if it exists. The
infrequent category will be mapped to the last position in the
encoding. During inverse transform, an unknown category will be
mapped to the category denoted `'infrequent'` if it exists. If the
`'infrequent'` category does not exist, then :meth:`transform` and
:meth:`inverse_transform` will handle an unknown category as with
`handle_unknown='ignore'`. Infrequent categories exist based on
`min_frequency` and `max_categories`. Read more in the
:ref:`User Guide `.
- 'warn' : When an unknown category is encountered during transform
a warning is issued, and the encoding then proceeds as described for
`handle_unknown="infrequent_if_exist"`.

.. versionchanged:: 1.1
`'infrequent_if_exist'` was added to automatically handle unknown
categories and infrequent categories.

.. versionadded:: 1.6
The option `"warn"` was added in 1.6.
'infrequent_if_exist'
min_frequency min_frequency: int or float, default=None

Specifies the minimum frequency below which a category will be
considered infrequent.

- If `int`, categories with a smaller cardinality will be considered
infrequent.

- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
0.0025
max_categories max_categories: int, default=None

Specifies an upper limit to the number of output features for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
None
feature_name_combiner feature_name_combiner: "concat" or callable, default="concat"

Callable with signature `def callable(input_feature, category)` that returns a
string. This is used to create feature names to be returned by
:meth:`get_feature_names_out`.

`"concat"` concatenates encoded feature name and category with
`feature + "_" + str(category)`.E.g. feature X with values 1, 6, 7 create
feature names `X_1, X_6, X_7`.

.. versionadded:: 1.3
'concat'
['Levy', 'Prod. year', 'Cylinders', 'Engine volume']
Parameters
copy copy: bool, default=True

If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
True
with_mean with_mean: bool, default=True

If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
True
with_std with_std: bool, default=True

If True, scale the data to unit variance (or equivalently,
unit standard deviation).
True
Parameters
missing_values missing_values: int, float, str, np.nan or None, default=np.nan

The placeholder for the missing values. All occurrences of
`missing_values` will be imputed. For pandas' dataframes with
nullable integer dtypes with missing values, `missing_values`
should be set to np.nan, since `pd.NA` will be converted to np.nan.
nan
n_neighbors n_neighbors: int, default=5

Number of neighboring samples to use for imputation.
5
weights weights: {'uniform', 'distance'} or callable, default='uniform'

Weight function used in prediction. Possible values:

- 'uniform' : uniform weights. All points in each neighborhood are
weighted equally.
- 'distance' : weight points by the inverse of their distance.
in this case, closer neighbors of a query point will have a
greater influence than neighbors which are further away.
- callable : a user-defined function which accepts an
array of distances, and returns an array of the same shape
containing the weights.
'distance'
metric metric: {'nan_euclidean'} or callable, default='nan_euclidean'

Distance metric for searching neighbors. Possible values:

- 'nan_euclidean'
- callable : a user-defined function which conforms to the definition
of ``func_metric(x, y, *, missing_values=np.nan)``. `x` and `y`
corresponds to a row (i.e. 1-D arrays) of `X` and `Y`, respectively.
The callable should returns a scalar distance value.
'nan_euclidean'
copy copy: bool, default=True

If True, a copy of X will be created. If False, imputation will
be done in-place whenever possible.
True
add_indicator add_indicator: bool, default=False

If True, a :class:`MissingIndicator` transform will stack onto the
output of the imputer's transform. This allows a predictive estimator
to account for missingness despite imputation. If a feature has no
missing values at fit/train time, the feature won't appear on the
missing indicator even if there are missing values at transform/test
time.
False
keep_empty_features keep_empty_features: bool, default=False

If True, features that consist exclusively of missing values when
`fit` is called are returned in results when `transform` is called.
The imputed value is always `0`.

.. versionadded:: 1.2
False
['Age', 'Mileage_ratio']
passthrough
Parameters
objective objective: typing.Union[str, xgboost.sklearn._SklObjWProto, typing.Callable[[typing.Any, typing.Any], typing.Tuple[numpy.ndarray, numpy.ndarray]], NoneType]

Specify the learning task and the corresponding learning objective or a custom
objective function to be used.

For custom objective, see :doc:`/tutorials/custom_metric_obj` and
:ref:`custom-obj-metric` for more information, along with the end note for
function signatures.
'reg:squarederror'
base_score base_score: typing.Union[float, typing.List[float], NoneType]

The initial prediction score of all instances, global bias.
None
booster None
callbacks callbacks: typing.Optional[typing.List[xgboost.callback.TrainingCallback]]

List of callback functions that are applied at end of each iteration.
It is possible to use predefined callbacks by using
:ref:`Callback API `.

.. note::

States in callback are not preserved during training, which means callback
objects can not be reused for multiple training sessions without
reinitialization or deepcopy.

.. code-block:: python

for params in parameters_grid:
# be sure to (re)initialize the callbacks before each run
callbacks = [xgb.callback.LearningRateScheduler(custom_rates)]
reg = xgboost.XGBRegressor(**params, callbacks=callbacks)
reg.fit(X, y)
None
colsample_bylevel colsample_bylevel: typing.Optional[float]

Subsample ratio of columns for each level.
None
colsample_bynode colsample_bynode: typing.Optional[float]

Subsample ratio of columns for each split.
None
colsample_bytree colsample_bytree: typing.Optional[float]

Subsample ratio of columns when constructing each tree.
None
device device: typing.Optional[str]

.. versionadded:: 2.0.0

Device ordinal, available options are `cpu`, `cuda`, and `gpu`.
None
early_stopping_rounds early_stopping_rounds: typing.Optional[int]

.. versionadded:: 1.6.0

- Activates early stopping. Validation metric needs to improve at least once in
every **early_stopping_rounds** round(s) to continue training. Requires at
least one item in **eval_set** in :py:meth:`fit`.

- If early stopping occurs, the model will have two additional attributes:
:py:attr:`best_score` and :py:attr:`best_iteration`. These are used by the
:py:meth:`predict` and :py:meth:`apply` methods to determine the optimal
number of trees during inference. If users want to access the full model
(including trees built after early stopping), they can specify the
`iteration_range` in these inference methods. In addition, other utilities
like model plotting can also use the entire model.

- If you prefer to discard the trees after `best_iteration`, consider using the
callback function :py:class:`xgboost.callback.EarlyStopping`.

- If there's more than one item in **eval_set**, the last entry will be used for
early stopping. If there's more than one metric in **eval_metric**, the last
metric will be used for early stopping.
None
enable_categorical enable_categorical: bool

See the same parameter of :py:class:`DMatrix` for details.
False
eval_metric eval_metric: typing.Union[str, typing.List[typing.Union[str, typing.Callable]], typing.Callable, NoneType]

.. versionadded:: 1.6.0

Metric used for monitoring the training result and early stopping. It can be a
string or list of strings as names of predefined metric in XGBoost (See
:doc:`/parameter`), one of the metrics in :py:mod:`sklearn.metrics`, or any
other user defined metric that looks like `sklearn.metrics`.

If custom objective is also provided, then custom metric should implement the
corresponding reverse link function.

Unlike the `scoring` parameter commonly used in scikit-learn, when a callable
object is provided, it's assumed to be a cost function and by default XGBoost
will minimize the result during early stopping.

For advanced usage on Early stopping like directly choosing to maximize instead
of minimize, see :py:obj:`xgboost.callback.EarlyStopping`.

See :doc:`/tutorials/custom_metric_obj` and :ref:`custom-obj-metric` for more
information.

.. code-block:: python

from sklearn.datasets import load_diabetes
from sklearn.metrics import mean_absolute_error
X, y = load_diabetes(return_X_y=True)
reg = xgb.XGBRegressor(
tree_method="hist",
eval_metric=mean_absolute_error,
)
reg.fit(X, y, eval_set=[(X, y)])
None
feature_types feature_types: typing.Optional[typing.Sequence[str]]

.. versionadded:: 1.7.0

Used for specifying feature types without constructing a dataframe. See
the :py:class:`DMatrix` for details.
None
feature_weights feature_weights: Optional[ArrayLike]

Weight for each feature, defines the probability of each feature being selected
when colsample is being used. All values must be greater than 0, otherwise a
`ValueError` is thrown.
None
gamma gamma: typing.Optional[float]

(min_split_loss) Minimum loss reduction required to make a further partition on
a leaf node of the tree.
None
grow_policy grow_policy: typing.Optional[str]

Tree growing policy.

- depthwise: Favors splitting at nodes closest to the node,
- lossguide: Favors splitting at nodes with highest loss change.
None
importance_type None
interaction_constraints interaction_constraints: typing.Union[str, typing.List[typing.Tuple[str]], NoneType]

Constraints for interaction representing permitted interactions. The
constraints must be specified in the form of a nested list, e.g. ``[[0, 1], [2,
3, 4]]``, where each inner list is a group of indices of features that are
allowed to interact with each other. See :doc:`tutorial
` for more information
None
learning_rate learning_rate: typing.Optional[float]

Boosting learning rate (xgb's "eta")
None
max_bin max_bin: typing.Optional[int]

If using histogram-based algorithm, maximum number of bins per feature
None
max_cat_threshold max_cat_threshold: typing.Optional[int]

.. versionadded:: 1.7.0

.. note:: This parameter is experimental

Maximum number of categories considered for each split. Used only by
partition-based splits for preventing over-fitting. Also, `enable_categorical`
needs to be set to have categorical feature support. See :doc:`Categorical Data
` and :ref:`cat-param` for details.
None
max_cat_to_onehot max_cat_to_onehot: Optional[int]

.. versionadded:: 1.6.0

.. note:: This parameter is experimental

A threshold for deciding whether XGBoost should use one-hot encoding based split
for categorical data. When number of categories is lesser than the threshold
then one-hot encoding is chosen, otherwise the categories will be partitioned
into children nodes. Also, `enable_categorical` needs to be set to have
categorical feature support. See :doc:`Categorical Data
` and :ref:`cat-param` for details.
None
max_delta_step max_delta_step: typing.Optional[float]

Maximum delta step we allow each tree's weight estimation to be.
None
max_depth max_depth: typing.Optional[int]

Maximum tree depth for base learners.
None
max_leaves max_leaves: typing.Optional[int]

Maximum number of leaves; 0 indicates no limit.
None
min_child_weight min_child_weight: typing.Optional[float]

Minimum sum of instance weight(hessian) needed in a child.
None
missing missing: float

Value in the data which needs to be present as a missing value. Default to
:py:data:`numpy.nan`.
nan
monotone_constraints monotone_constraints: typing.Union[typing.Dict[str, int], str, NoneType]

Constraint of variable monotonicity. See :doc:`tutorial `
for more information.
None
multi_strategy multi_strategy: typing.Optional[str]

.. versionadded:: 2.0.0

.. note:: This parameter is working-in-progress.

The strategy used for training multi-target models, including multi-target
regression and multi-class classification. See :doc:`/tutorials/multioutput` for
more information.

- ``one_output_per_tree``: One model for each target.
- ``multi_output_tree``: Use multi-target trees.
None
n_estimators n_estimators: typing.Optional[int]

Number of gradient boosted trees. Equivalent to number of boosting
rounds.
None
n_jobs n_jobs: typing.Optional[int]

Number of parallel threads used to run xgboost. When used with other
Scikit-Learn algorithms like grid search, you may choose which algorithm to
parallelize and balance the threads. Creating thread contention will
significantly slow down both algorithms.
None
num_parallel_tree None
random_state random_state: typing.Union[numpy.random.mtrand.RandomState, numpy.random._generator.Generator, int, NoneType]

Random number seed.

.. note::

Using gblinear booster with shotgun updater is nondeterministic as
it uses Hogwild algorithm.
None
reg_alpha reg_alpha: typing.Optional[float]

L1 regularization term on weights (xgb's alpha).
None
reg_lambda reg_lambda: typing.Optional[float]

L2 regularization term on weights (xgb's lambda).
None
sampling_method sampling_method: typing.Optional[str]

Sampling method. Used only by the GPU version of ``hist`` tree method.

- ``uniform``: Select random training instances uniformly.
- ``gradient_based``: Select random training instances with higher probability
when the gradient and hessian are larger. (cf. CatBoost)
None
scale_pos_weight scale_pos_weight: typing.Optional[float]

Balancing of positive and negative weights.
None
subsample subsample: typing.Optional[float]

Subsample ratio of the training instance.
None
tree_method tree_method: typing.Optional[str]

Specify which tree method to use. Default to auto. If this parameter is set to
default, XGBoost will choose the most conservative option available. It's
recommended to study this option from the parameters document :doc:`tree method
`
None
validate_parameters validate_parameters: typing.Optional[bool]

Give warnings for unknown parameter.
None
verbosity verbosity: typing.Optional[int]

The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
None
In [49]:
params = {
    "regressor__learning_rate": [0.1, 0.2],
    "regressor__max_depth": [5, 8],
    "regressor__min_child_weight": [2, 3],
    "regressor__reg_lambda": [0.5, 1, 2],
}

gridsearchXG = GridSearchCV(
    pipelinexg, param_grid=params, cv=5, scoring="r2", verbose=2
).fit(X_train, Y_train)
gridsearchXG
Fitting 5 folds for each of 24 candidates, totalling 120 fits
[CV] END regressor__learning_rate=0.1, regressor__max_depth=5, regressor__min_child_weight=2, regressor__reg_lambda=0.5; total time=   3.1s
[CV] END regressor__learning_rate=0.1, regressor__max_depth=5, regressor__min_child_weight=2, regressor__reg_lambda=0.5; total time=   3.1s
[CV] END regressor__learning_rate=0.1, regressor__max_depth=5, regressor__min_child_weight=2, regressor__reg_lambda=0.5; total time=   3.1s
[CV] END regressor__learning_rate=0.1, regressor__max_depth=5, regressor__min_child_weight=2, regressor__reg_lambda=0.5; total time=   3.1s
[CV] END regressor__learning_rate=0.1, regressor__max_depth=5, regressor__min_child_weight=2, regressor__reg_lambda=0.5; total time=   3.2s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=0.5; total time=   3.3s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=0.5; total time=   3.5s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=0.5; total time=   3.6s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=1; total time=   3.4s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=1; total time=   3.8s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=1; total time=   3.8s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=1; total time=   3.6s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=1; total time=   3.7s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=2; total time=   3.3s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=2; total time=   3.3s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=2; total time=   3.5s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=2; total time=   3.6s
[CV] END regressor__learning_rate=0.2, regressor__max_depth=8, regressor__min_child_weight=3, regressor__reg_lambda=2; total time=   3.3s
Out[49]:
GridSearchCV(cv=5,
             estimator=Pipeline(steps=[('CT',
                                        ColumnTransformer(remainder='passthrough',
                                                          transformers=[('ohe',
                                                                         OneHotEncoder(handle_unknown='ignore'),
                                                                         ['Category',
                                                                          'Fuel '
                                                                          'type',
                                                                          'Color',
                                                                          'Gear '
                                                                          'box '
                                                                          'type',
                                                                          'Drive '
                                                                          'wheels',
                                                                          'Wheel',
                                                                          'Doors']),
                                                                        ('pt_std',
                                                                         PowerTransformer(),
                                                                         ['Mileage']),
                                                                        ('std',
                                                                         StandardScaler(),
                                                                         ['Airbags']),
                                                                        ('ode',
                                                                         OrdinalEncoder(),
                                                                         ['Leather '...
                                                     max_delta_step=None,
                                                     max_depth=None,
                                                     max_leaves=None,
                                                     min_child_weight=None,
                                                     missing=nan,
                                                     monotone_constraints=None,
                                                     multi_strategy=None,
                                                     n_estimators=None,
                                                     n_jobs=None,
                                                     num_parallel_tree=None, ...))]),
             param_grid={'regressor__learning_rate': [0.1, 0.2],
                         'regressor__max_depth': [5, 8],
                         'regressor__min_child_weight': [2, 3],
                         'regressor__reg_lambda': [0.5, 1, 2]},
             scoring='r2', verbose=2)
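After a search like the one above finishes, the useful results live in `best_params_`, `best_score_`, and `cv_results_`. The following is a minimal, self-contained sketch of that inspection pattern on synthetic data; a small `Ridge` grid stands in for the notebook's XGBoost grid so it runs without `xgboost` installed, and all variable names here are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic regression data standing in for the car-price features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X @ np.array([3.0, -2.0, 1.0, 0.5]) + rng.normal(scale=0.1, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Same shape as the notebook's setup: preprocessing + regressor in a
# Pipeline, searched with scoring='r2' and 5-fold CV.
pipe = Pipeline([("std", StandardScaler()), ("regressor", Ridge())])
grid = GridSearchCV(pipe,
                    param_grid={"regressor__alpha": [0.1, 1.0, 10.0]},
                    scoring="r2", cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)   # winning hyperparameter combination
print(grid.best_score_)    # mean cross-validated R2 of that combination

# Rank every candidate by mean test R2 via cv_results_.
results = (pd.DataFrame(grid.cv_results_)
             .sort_values("rank_test_score")
             [["params", "mean_test_score", "std_test_score"]])
print(results)

# refit=True (the default) retrains the best pipeline on the full
# training set, so the search object predicts directly.
y_pred = grid.predict(X_test)
print("R2 :", r2_score(y_test, y_pred))
print("MAE:", mean_absolute_error(y_test, y_pred))
```

The same calls apply to the fitted search above: `cv_results_` exposes the per-candidate mean and standard deviation of the R2 scores, which helps judge whether the best combination wins by a meaningful margin or is within noise of its neighbours.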
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
Parameters
estimator estimator: estimator object

This is assumed to implement the scikit-learn estimator interface.
Either estimator needs to provide a ``score`` function,
or ``scoring`` must be passed.
Pipeline(step...=None, ...))])
param_grid param_grid: dict or list of dictionaries

Dictionary with parameters names (`str`) as keys and lists of
parameter settings to try as values, or a list of such
dictionaries, in which case the grids spanned by each dictionary
in the list are explored. This enables searching over any sequence
of parameter settings.
{'regressor__learning_rate': [0.1, 0.2], 'regressor__max_depth': [5, 8], 'regressor__min_child_weight': [2, 3], 'regressor__reg_lambda': [0.5, 1, ...]}
scoring scoring: str, callable, list, tuple or dict, default=None

Strategy to evaluate the performance of the cross-validated model on
the test set.

If `scoring` represents a single score, one can use:

- a single string (see :ref:`scoring_string_names`);
- a callable (see :ref:`scoring_callable`) that returns a single value;
- `None`, the `estimator`'s
:ref:`default evaluation criterion ` is used.

If `scoring` represents multiple scores, one can use:

- a list or tuple of unique strings;
- a callable returning a dictionary where the keys are the metric
names and the values are the metric scores;
- a dictionary with metric names as keys and callables as values.

See :ref:`multimetric_grid_search` for an example.
'r2'
n_jobs n_jobs: int, default=None

Number of jobs to run in parallel.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary `
for more details.

.. versionchanged:: v0.20
`n_jobs` default changed from 1 to None
None
refit refit: bool, str, or callable, default=True

Refit an estimator using the best found parameters on the whole
dataset.

For multiple metric evaluation, this needs to be a `str` denoting the
scorer that would be used to find the best parameters for refitting
the estimator at the end.

Where there are considerations other than maximum score in
choosing a best estimator, ``refit`` can be set to a function which
returns the selected ``best_index_`` given ``cv_results_``. In that
case, the ``best_estimator_`` and ``best_params_`` will be set
according to the returned ``best_index_`` while the ``best_score_``
attribute will not be available.

The refitted estimator is made available at the ``best_estimator_``
attribute and permits using ``predict`` directly on this
``GridSearchCV`` instance.

Also for multiple metric evaluation, the attributes ``best_index_``,
``best_score_`` and ``best_params_`` will only be available if
``refit`` is set and all of them will be determined w.r.t this specific
scorer.

See ``scoring`` parameter to know more about multiple metric
evaluation.

See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
to see how to design a custom selection strategy using a callable
via `refit`.

See :ref:`this example
`
for an example of how to use ``refit=callable`` to balance model
complexity and cross-validated score.

.. versionchanged:: 0.20
Support for callable added.
True
cv cv: int, cross-validation generator or an iterable, default=None

Determines the cross-validation splitting strategy.
Possible inputs for cv are:

- None, to use the default 5-fold cross validation,
- integer, to specify the number of folds in a `(Stratified)KFold`,
- :term:`CV splitter`,
- An iterable yielding (train, test) splits as arrays of indices.

For integer/None inputs, if the estimator is a classifier and ``y`` is
either binary or multiclass, :class:`StratifiedKFold` is used. In all
other cases, :class:`KFold` is used. These splitters are instantiated
with `shuffle=False` so the splits will be the same across calls.

Refer :ref:`User Guide ` for the various
cross-validation strategies that can be used here.

.. versionchanged:: 0.22
``cv`` default value if None changed from 3-fold to 5-fold.
5
verbose verbose: int

Controls the verbosity: the higher, the more messages.

- >1 : the computation time for each fold and parameter candidate is
displayed;
- >2 : the score is also displayed;
- >3 : the fold and candidate parameter indexes are also displayed
together with the starting time of the computation.
2
pre_dispatch pre_dispatch: int, or str, default='2*n_jobs'

Controls the number of jobs that get dispatched during parallel
execution. Reducing this number can be useful to avoid an
explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:

- None, in which case all the jobs are immediately created and spawned. Use
this for lightweight and fast-running jobs, to avoid delays due to on-demand
spawning of the jobs
- An int, giving the exact number of total jobs that are spawned
- A str, giving an expression as a function of n_jobs, as in '2*n_jobs'
'2*n_jobs'
error_score error_score: 'raise' or numeric, default=np.nan

Value to assign to the score if an error occurs in estimator fitting.
If set to 'raise', the error is raised. If a numeric value is given,
FitFailedWarning is raised. This parameter does not affect the refit
step, which will always raise the error.
nan
return_train_score return_train_score: bool, default=False

If ``False``, the ``cv_results_`` attribute will not include training
scores.
Computing training scores is used to get insights on how different
parameter settings impact the overfitting/underfitting trade-off.
However computing the scores on the training set can be computationally
expensive and is not strictly required to select the parameters that
yield the best generalization performance.

.. versionadded:: 0.19

.. versionchanged:: 0.21
Default value was changed from ``True`` to ``False``
False
Parameters
transformers transformers: list of tuples

List of (name, transformer, columns) tuples specifying the
transformer objects to be applied to subsets of the data.

name : str
Like in Pipeline and FeatureUnion, this allows the transformer and
its parameters to be set using ``set_params`` and searched in grid
search.
transformer : {'drop', 'passthrough'} or estimator
Estimator must support :term:`fit` and :term:`transform`.
Special-cased strings 'drop' and 'passthrough' are accepted as
well, to indicate to drop the columns or to pass them through
untransformed, respectively.
columns : str, array-like of str, int, array-like of int, array-like of bool, slice or callable
Indexes the data on its second axis. Integers are interpreted as
positional columns, while strings can reference DataFrame columns
by name. A scalar string or int should be used where
``transformer`` expects X to be a 1d array-like (vector),
otherwise a 2d array will be passed to the transformer.
A callable is passed the input data `X` and can return any of the
above. To select multiple columns by name or dtype, you can use
:obj:`make_column_selector`.
[('ohe', ...), ('pt_std', ...), ...]
remainder remainder: {'drop', 'passthrough'} or estimator, default='drop'

By default, only the specified columns in `transformers` are
transformed and combined in the output, and the non-specified
columns are dropped. (default of ``'drop'``).
By specifying ``remainder='passthrough'``, all remaining columns that
were not specified in `transformers`, but present in the data passed
to `fit` will be automatically passed through. This subset of columns
is concatenated with the output of the transformers. For dataframes,
extra columns not seen during `fit` will be excluded from the output
of `transform`.
By setting ``remainder`` to be an estimator, the remaining
non-specified columns will use the ``remainder`` estimator. The
estimator must support :term:`fit` and :term:`transform`.
Note that using this feature requires that the DataFrame columns
input at :term:`fit` and :term:`transform` have identical order.
'passthrough'
sparse_threshold sparse_threshold: float, default=0.3

If the output of the different transformers contains sparse matrices,
these will be stacked as a sparse matrix if the overall density is
lower than this value. Use ``sparse_threshold=0`` to always return
dense. When the transformed output consists of all dense data, the
stacked result will be dense, and this keyword will be ignored.
0.3
n_jobs n_jobs: int, default=None

Number of jobs to run in parallel.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary `
for more details.
None
transformer_weights transformer_weights: dict, default=None

Multiplicative weights for features per transformer. The output of the
transformer is multiplied by these weights. Keys are transformer names,
values the weights.
None
verbose verbose: bool, default=False

If True, the time elapsed while fitting each transformer will be
printed as it is completed.
False
verbose_feature_names_out verbose_feature_names_out: bool, str or Callable[[str, str], str], default=True

- If True, :meth:`ColumnTransformer.get_feature_names_out` will prefix
all feature names with the name of the transformer that generated that
feature. It is equivalent to setting
`verbose_feature_names_out="{transformer_name}__{feature_name}"`.
- If False, :meth:`ColumnTransformer.get_feature_names_out` will not
prefix any feature names and will error if feature names are not
unique.
- If ``Callable[[str, str], str]``,
:meth:`ColumnTransformer.get_feature_names_out` will rename all the features
using the name of the transformer. The first argument of the callable is the
transformer name and the second argument is the feature name. The returned
string will be the new feature name.
- If ``str``, it must be a string ready for formatting. The given string will
be formatted using two field names: ``transformer_name`` and ``feature_name``.
e.g. ``"{feature_name}__{transformer_name}"``. See :meth:`str.format` method
from the standard library for more info.

.. versionadded:: 1.0

.. versionchanged:: 1.6
`verbose_feature_names_out` can be a callable or a string to be formatted.
True
force_int_remainder_cols force_int_remainder_cols: bool, default=False

This parameter has no effect.

.. note::
If you do not access the list of columns for the remainder columns
in the `transformers_` fitted attribute, you do not need to set
this parameter.

.. versionadded:: 1.5

.. versionchanged:: 1.7
The default value for `force_int_remainder_cols` will change from
`True` to `False` in version 1.7.

.. deprecated:: 1.7
`force_int_remainder_cols` is deprecated and will be removed in 1.9.
'deprecated'
['Category', 'Fuel type', 'Color', 'Gear box type', 'Drive wheels', 'Wheel', 'Doors']
Parameters
categories categories: 'auto' or a list of array-like, default='auto'

Categories (unique values) per feature:

- 'auto' : Determine categories automatically from the training data.
- list : ``categories[i]`` holds the categories expected in the ith
column. The passed categories should not mix strings and numeric
values within a single feature, and should be sorted in case of
numeric values.

The used categories can be found in the ``categories_`` attribute.

.. versionadded:: 0.20
'auto'
drop drop: {'first', 'if_binary'} or an array-like of shape (n_features,), default=None

Specifies a methodology to use to drop one of the categories per
feature. This is useful in situations where perfectly collinear
features cause problems, such as when feeding the resulting data
into an unregularized linear regression model.

However, dropping one category breaks the symmetry of the original
representation and can therefore induce a bias in downstream models,
for instance for penalized linear classification or regression models.

- None : retain all features (the default).
- 'first' : drop the first category in each feature. If only one
category is present, the feature will be dropped entirely.
- 'if_binary' : drop the first category in each feature with two
categories. Features with 1 or more than 2 categories are
left intact.
- array : ``drop[i]`` is the category in feature ``X[:, i]`` that
should be dropped.

When `max_categories` or `min_frequency` is configured to group
infrequent categories, the dropping behavior is handled after the
grouping.

.. versionadded:: 0.21
The parameter `drop` was added in 0.21.

.. versionchanged:: 0.23
The option `drop='if_binary'` was added in 0.23.

.. versionchanged:: 1.1
Support for dropping infrequent categories.
None
sparse_output sparse_output: bool, default=True

When ``True``, it returns a :class:`scipy.sparse.csr_matrix`,
i.e. a sparse matrix in "Compressed Sparse Row" (CSR) format.

.. versionadded:: 1.2
`sparse` was renamed to `sparse_output`
True
dtype dtype: number type, default=np.float64

Desired dtype of output.
<class 'numpy.float64'>
handle_unknown handle_unknown: {'error', 'ignore', 'infrequent_if_exist', 'warn'}, default='error'

Specifies the way unknown categories are handled during :meth:`transform`.

- 'error' : Raise an error if an unknown category is present during transform.
- 'ignore' : When an unknown category is encountered during
transform, the resulting one-hot encoded columns for this feature
will be all zeros. In the inverse transform, an unknown category
will be denoted as None.
- 'infrequent_if_exist' : When an unknown category is encountered
during transform, the resulting one-hot encoded columns for this
feature will map to the infrequent category if it exists. The
infrequent category will be mapped to the last position in the
encoding. During inverse transform, an unknown category will be
mapped to the category denoted `'infrequent'` if it exists. If the
`'infrequent'` category does not exist, then :meth:`transform` and
:meth:`inverse_transform` will handle an unknown category as with
`handle_unknown='ignore'`. Infrequent categories exist based on
`min_frequency` and `max_categories`. Read more in the
:ref:`User Guide `.
- 'warn' : When an unknown category is encountered during transform
a warning is issued, and the encoding then proceeds as described for
`handle_unknown="infrequent_if_exist"`.

.. versionchanged:: 1.1
`'infrequent_if_exist'` was added to automatically handle unknown
categories and infrequent categories.

.. versionadded:: 1.6
The option `"warn"` was added in 1.6.
'ignore'
min_frequency min_frequency: int or float, default=None

Specifies the minimum frequency below which a category will be
considered infrequent.

- If `int`, categories with a smaller cardinality will be considered
infrequent.

- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
None
max_categories max_categories: int, default=None

Specifies an upper limit to the number of output features for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
None
feature_name_combiner feature_name_combiner: "concat" or callable, default="concat"

Callable with signature `def callable(input_feature, category)` that returns a
string. This is used to create feature names to be returned by
:meth:`get_feature_names_out`.

`"concat"` concatenates encoded feature name and category with
`feature + "_" + str(category)`.E.g. feature X with values 1, 6, 7 create
feature names `X_1, X_6, X_7`.

.. versionadded:: 1.3
'concat'
['Mileage']
Parameters
method method: {'yeo-johnson', 'box-cox'}, default='yeo-johnson'

The power transform method. Available methods are:

- 'yeo-johnson' [1]_, works with positive and negative values
- 'box-cox' [2]_, only works with strictly positive values
'yeo-johnson'
standardize standardize: bool, default=True

Set to True to apply zero-mean, unit-variance normalization to the
transformed output.
True
copy copy: bool, default=True

Set to False to perform inplace computation during transformation.
True
['Airbags']
Parameters
copy copy: bool, default=True

If False, try to avoid a copy and do inplace scaling instead.
This is not guaranteed to always work inplace; e.g. if the data is
not a NumPy array or scipy.sparse CSR matrix, a copy may still be
returned.
True
with_mean with_mean: bool, default=True

If True, center the data before scaling.
This does not work (and will raise an exception) when attempted on
sparse matrices, because centering them entails building a dense
matrix which in common use cases is likely to be too large to fit in
memory.
True
with_std with_std: bool, default=True

If True, scale the data to unit variance (or equivalently,
unit standard deviation).
True
['Leather interior']
Parameters
categories categories: 'auto' or a list of array-like, default='auto'

Categories (unique values) per feature:

- 'auto' : Determine categories automatically from the training data.
- list : ``categories[i]`` holds the categories expected in the ith
column. The passed categories should not mix strings and numeric
values, and should be sorted in case of numeric values.

The used categories can be found in the ``categories_`` attribute.
'auto'
dtype dtype: number type, default=np.float64

Desired dtype of output.
<class 'numpy.float64'>
handle_unknown handle_unknown: {'error', 'use_encoded_value'}, default='error'

When set to 'error' an error will be raised in case an unknown
categorical feature is present during transform. When set to
'use_encoded_value', the encoded value of unknown categories will be
set to the value given for the parameter `unknown_value`. In
:meth:`inverse_transform`, an unknown category will be denoted as None.

.. versionadded:: 0.24
'error'
unknown_value unknown_value: int or np.nan, default=None

When the parameter handle_unknown is set to 'use_encoded_value', this
parameter is required and will set the encoded value of unknown
categories. It has to be distinct from the values used to encode any of
the categories in `fit`. If set to np.nan, the `dtype` parameter must
be a float dtype.

.. versionadded:: 0.24
None
encoded_missing_value encoded_missing_value: int or np.nan, default=np.nan

Encoded value of missing categories. If set to `np.nan`, then the `dtype`
parameter must be a float dtype.

.. versionadded:: 1.1
nan
min_frequency min_frequency: int or float, default=None

Specifies the minimum frequency below which a category will be
considered infrequent.

- If `int`, categories with a smaller cardinality will be considered
infrequent.

- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent.

.. versionadded:: 1.3
Read more in the :ref:`User Guide `.
None
max_categories max_categories: int, default=None

Specifies an upper limit to the number of output categories for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features.

`max_categories` do **not** take into account missing or unknown
categories. Setting `unknown_value` or `encoded_missing_value` to an
integer will increase the number of unique integer codes by one each.
This can result in up to `max_categories + 2` integer codes.

.. versionadded:: 1.3
Read more in the :ref:`User Guide `.
None
['Manufacturer']
Parameters
categories categories: 'auto' or a list of array-like, default='auto'

Categories (unique values) per feature:

- 'auto' : Determine categories automatically from the training data.
- list : ``categories[i]`` holds the categories expected in the ith
column. The passed categories should not mix strings and numeric
values within a single feature, and should be sorted in case of
numeric values.

The used categories can be found in the ``categories_`` attribute.

.. versionadded:: 0.20
'auto'
drop drop: {'first', 'if_binary'} or an array-like of shape (n_features,), default=None

Specifies a methodology to use to drop one of the categories per
feature. This is useful in situations where perfectly collinear
features cause problems, such as when feeding the resulting data
into an unregularized linear regression model.

However, dropping one category breaks the symmetry of the original
representation and can therefore induce a bias in downstream models,
for instance for penalized linear classification or regression models.

- None : retain all features (the default).
- 'first' : drop the first category in each feature. If only one
category is present, the feature will be dropped entirely.
- 'if_binary' : drop the first category in each feature with two
categories. Features with 1 or more than 2 categories are
left intact.
- array : ``drop[i]`` is the category in feature ``X[:, i]`` that
should be dropped.

When `max_categories` or `min_frequency` is configured to group
infrequent categories, the dropping behavior is handled after the
grouping.

.. versionadded:: 0.21
The parameter `drop` was added in 0.21.

.. versionchanged:: 0.23
The option `drop='if_binary'` was added in 0.23.

.. versionchanged:: 1.1
Support for dropping infrequent categories.
None
sparse_output sparse_output: bool, default=True

When ``True``, it returns a :class:`scipy.sparse.csr_matrix`,
i.e. a sparse matrix in "Compressed Sparse Row" (CSR) format.

.. versionadded:: 1.2
`sparse` was renamed to `sparse_output`
False
dtype dtype: number type, default=np.float64

Desired dtype of output.
<class 'numpy.float64'>
handle_unknown handle_unknown: {'error', 'ignore', 'infrequent_if_exist', 'warn'}, default='error'

Specifies the way unknown categories are handled during :meth:`transform`.

- 'error' : Raise an error if an unknown category is present during transform.
- 'ignore' : When an unknown category is encountered during
transform, the resulting one-hot encoded columns for this feature
will be all zeros. In the inverse transform, an unknown category
will be denoted as None.
- 'infrequent_if_exist' : When an unknown category is encountered
during transform, the resulting one-hot encoded columns for this
feature will map to the infrequent category if it exists. The
infrequent category will be mapped to the last position in the
encoding. During inverse transform, an unknown category will be
mapped to the category denoted `'infrequent'` if it exists. If the
`'infrequent'` category does not exist, then :meth:`transform` and
:meth:`inverse_transform` will handle an unknown category as with
`handle_unknown='ignore'`. Infrequent categories exist based on
`min_frequency` and `max_categories`. Read more in the
:ref:`User Guide `.
- 'warn' : When an unknown category is encountered during transform
a warning is issued, and the encoding then proceeds as described for
`handle_unknown="infrequent_if_exist"`.

.. versionchanged:: 1.1
`'infrequent_if_exist'` was added to automatically handle unknown
categories and infrequent categories.

.. versionadded:: 1.6
The option `"warn"` was added in 1.6.
'infrequent_if_exist'
min_frequency min_frequency: int or float, default=None

Specifies the minimum frequency below which a category will be
considered infrequent.

- If `int`, categories with a smaller cardinality will be considered
infrequent.

- If `float`, categories with a smaller cardinality than
`min_frequency * n_samples` will be considered infrequent.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
0.0025
max_categories max_categories: int, default=None

Specifies an upper limit to the number of output features for each input
feature when considering infrequent categories. If there are infrequent
categories, `max_categories` includes the category representing the
infrequent categories along with the frequent categories. If `None`,
there is no limit to the number of output features.

.. versionadded:: 1.1
Read more in the :ref:`User Guide `.
None
feature_name_combiner feature_name_combiner: "concat" or callable, default="concat"

Callable with signature `def callable(input_feature, category)` that returns a
string. This is used to create feature names to be returned by
:meth:`get_feature_names_out`.

`"concat"` concatenates encoded feature name and category with
`feature + "_" + str(category)`.E.g. feature X with values 1, 6, 7 create
feature names `X_1, X_6, X_7`.

.. versionadded:: 1.3
'concat'
['Levy', 'Prod. year', 'Cylinders', 'Engine volume']
[StandardScaler documentation trimmed; fitted with defaults: copy=True, with_mean=True, with_std=True.]
[KNNImputer documentation trimmed; fitted with n_neighbors=5, weights='distance', metric='nan_euclidean', and defaults otherwise.]
['Age', 'Mileage_ratio']
passthrough
[XGBRegressor documentation trimmed. Non-default hyperparameters from the grid search: learning_rate=0.2, max_depth=8, min_child_weight=3, reg_lambda=0.5; objective='reg:squarederror'; all remaining parameters at their defaults.]
In [50]:
Y_pred = gridsearchXG.predict(X_test)
rmse = root_mean_squared_error(Y_test, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_test, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_test, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_test, Y_pred)
print('MAE:', mae)
RMSE: 7057.88818359375
MSE: 49813784.0
R2: 0.7449948787689209
MAE: 4349.7158203125
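The same four metrics are recomputed verbatim for the training split in the next cell; a small helper (a sketch reusing only names already imported in this notebook) would avoid the duplication:

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             r2_score)

def report_metrics(y_true, y_pred):
    """Return the four metrics this notebook reports for each split."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "RMSE": float(np.sqrt(mse)),
        "MSE": float(mse),
        "R2": float(r2_score(y_true, y_pred)),
        "MAE": float(mean_absolute_error(y_true, y_pred)),
    }

# Usage with the notebook's own objects (names assumed from the cells
# above):
# for name, val in report_metrics(Y_test, gridsearchXG.predict(X_test)).items():
#     print(name + ":", val)
```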
In [51]:
Y_pred = gridsearchXG.predict(X_train)
rmse = root_mean_squared_error(Y_train, Y_pred)
print('RMSE:', rmse)

mse = mean_squared_error(Y_train, Y_pred)
print('MSE:', mse)

r2 = r2_score(Y_train, Y_pred)
print('R2:', r2)

mae = mean_absolute_error(Y_train, Y_pred)
print('MAE:', mae)
RMSE: 4004.484375
MSE: 16035895.0
R2: 0.9183295369148254
MAE: 2596.77734375

AT LAST, WE CONCLUDE THAT THE RANDOM FOREST REGRESSOR IS THE BEST MODEL SO FAR FOR CAR PRICE PREDICTION ON THIS DATASET. WE HAVE ALSO ACHIEVED OUR METRIC GOALS: AN R2 SCORE OF 0.755 (> 0.75) AND AN MAE OF 4037 (< 5000).

THINGS I CAN DO ON THIS PROJECT IN THE FUTURE¶

Try an alternative outlier-handling method.

Try other models that I learn in the future.

Do further feature engineering if I find promising ideas.

Deploy the model as a website.

THANK YOU FOR VISITING THIS PROJECT.¶

I would love to hear your suggestions. You can mail me at amanray8900@gmail.com